Results 1 - 6 of 6
1.
Comput Biol Med ; 152: 106337, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36502695

ABSTRACT

Immunotherapy targeting immune checkpoint proteins, such as programmed cell death ligand 1 (PD-L1), has shown impressive outcomes in many clinical trials, but only 20%-40% of patients benefit from it. Using the Combined Positive Score (CPS) to evaluate PD-L1 expression in tumour biopsies, and thereby identify the patients most likely to respond to anti-PD-1/PD-L1 therapy, has been approved by the Food and Drug Administration for several solid tumour types. The current CPS workflow requires a pathologist to manually score two-colour PD-L1 chromogenic immunohistochemistry images. Multiplex immunofluorescence (mIF) imaging reveals the expression of a larger number of immune markers in tumour biopsies and has been used extensively in immunotherapy research. Recent rapid progress in Artificial Intelligence (AI)-based image analysis, particularly deep learning, provides cost-effective and high-quality solutions for healthcare. In this article, we propose an imaging pipeline that takes three-colour mIF images (DAPI, PD-L1, and Pan-cytokeratin) as input and predicts the CPS using AI techniques. Our novel pipeline is composed of three modules employing image processing, machine learning, and deep learning techniques. The first module, quality check (QC), detects and removes image regions contaminated with sectioning and staining artefacts, ensuring that only regions free of the three common artefacts are used for downstream analysis. The second module, nuclear segmentation, uses deep learning to segment and count nuclei in the DAPI images; our specialized method can accurately separate touching nuclei. The third module, cell phenotyping, calculates the CPS by identifying and counting PD-L1-positive cells and tumour cells. These modules are data-efficient and require only a few manual annotations for training. Using tumour biopsies from a clinical trial, we found that the CPS from the AI-based models shows a high Spearman correlation (78%, p = 0.003) with the pathologist-scored CPS.
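For context, the CPS itself is simple arithmetic once cells have been phenotyped: the number of PD-L1-positive cells divided by the number of viable tumour cells, multiplied by 100 and capped at 100. Below is a minimal Python sketch of that final step, assuming the upstream segmentation and phenotyping modules have already produced per-cell labels; the function and field names are illustrative, not the paper's code:

```python
def combined_positive_score(cells):
    """Compute CPS from per-cell phenotype labels.

    `cells` is a list of dicts with boolean flags, e.g.
    {"is_tumour": True, "pdl1_positive": False}, as would be
    produced by (hypothetical) upstream phenotyping modules.
    """
    pdl1_positive = sum(c["pdl1_positive"] for c in cells)
    tumour_cells = sum(c["is_tumour"] for c in cells)
    if tumour_cells == 0:
        raise ValueError("CPS is undefined without viable tumour cells")
    # By convention, CPS is capped at 100.
    return min(100.0, 100.0 * pdl1_positive / tumour_cells)

# Toy example: 3 PD-L1+ cells among 10 tumour cells -> CPS = 30.0
cells = [{"is_tumour": True, "pdl1_positive": i < 3} for i in range(10)]
print(combined_positive_score(cells))
```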


Subjects
Artificial Intelligence, Neoplasms, Humans, B7-H1 Antigen/metabolism, Neoplasms/diagnostic imaging, Immunohistochemistry, Fluorescent Antibody Technique, Biomarkers, Tumor/metabolism
2.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 1711-1714, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34891616

ABSTRACT

Molecular profiling of the tumor, in addition to histological tumor analysis, can provide robust information for targeted cancer therapies. Often such data are not available for analysis due to processing delays, cost, or inaccessibility. In this paper, we propose a deep learning-based method to predict RNA-sequencing expression (RNA-seq) from Hematoxylin and Eosin whole-slide images (H&E WSI) in head and neck cancer patients. Conventional methods use a patch-by-patch prediction and aggregation strategy to predict RNA-seq at the whole-slide level. However, these methods lose the spatial-contextual relationships between patches, which capture morphological interactions crucial for predicting RNA-seq. We propose a novel framework that employs a neural image compressor to preserve the spatial relationships between patches and generate a compressed representation of the whole-slide image, together with a customized deep-learning regressor that predicts RNA-seq from the compressed representation by learning both global and local features. We tested our method on the publicly available TCGA-HNSC dataset, comprising 43 test patients and 10 oncogenes. Our experiments showed that the proposed method achieves a 4.12% higher mean correlation and predicts 6 out of 10 genes with better correlation than a state-of-the-art baseline method. Furthermore, we provide interpretability through pathway analysis of the best-predicted genes and activation maps that highlight the regions in an H&E image most salient for the RNA-seq prediction. Clinical relevance: the proposed method has the potential to discover genetic biomarkers directly from histopathology images, which could be used to pre-screen patients before actual genetic testing, thereby saving cost and time.
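A minimal PyTorch sketch of the two-stage idea follows, assuming a pre-trained patch encoder: patch embeddings are re-assembled into a spatial grid (the "compressed" slide), and a small CNN regresses the 10 gene-expression values from that grid. All module names and layer sizes are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class CompressedSlideRegressor(nn.Module):
    """Regress gene expression from a grid of patch embeddings."""
    def __init__(self, embed_dim=128, n_genes=10):
        super().__init__()
        # Local features: convolutions over the patch-embedding grid,
        # preserving spatial relationships between neighbouring patches.
        self.conv = nn.Sequential(
            nn.Conv2d(embed_dim, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Global feature: pool over the whole slide, then regress.
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_genes)
        )

    def forward(self, grid):          # grid: (batch, embed_dim, H, W)
        return self.head(self.conv(grid))

# Toy usage: a 32x32 grid of 128-d patch embeddings for one slide.
grid = torch.randn(1, 128, 32, 32)   # stand-in for compressor output
model = CompressedSlideRegressor()
rna_pred = model(grid)               # (1, 10) predicted expression values
print(rna_pred.shape)
```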


Subjects
Head and Neck Neoplasms, Eosine Yellowish-(YS), Head and Neck Neoplasms/genetics, Hematoxylin, Humans, RNA/genetics
3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3475-3478, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34891988

ABSTRACT

Automated nuclei segmentation from immunofluorescence (IF) microscopy images is a crucial first step in digital pathology. Much research has been devoted to developing novel nuclei segmentation algorithms that perform well on good-quality images, but fewer methods have been developed for poor-quality images such as out-of-focus (blurry) data. In this work, we take a principled approach to studying the performance of nuclei segmentation algorithms on out-of-focus images at different levels of blur. We propose a deep learning encoder-decoder framework with a novel Y-forked decoder, whose two fork ends produce the segmentation and deblurring outputs. Adding a separate deblurring task to the training paradigm helps regularize the network on blurry images. Our proposed method accurately predicts instance nuclei segmentation on sharp as well as out-of-focus images. Additionally, the predicted deblurred image provides interpretable insights to experts. Experimental analysis on the Human U2OS cells (out-of-focus) dataset shows that our algorithm is robust and outperforms state-of-the-art methods.
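A minimal PyTorch sketch of the Y-forked idea under simplifying assumptions (a tiny encoder, one convolutional head per fork, unweighted joint loss); the paper's actual architecture and loss weights are not reproduced here:

```python
import torch
import torch.nn as nn

class YForkedNet(nn.Module):
    """Shared encoder with two decoder forks: segmentation and deblurring."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Fork 1: per-pixel nucleus probability map.
        self.seg_head = nn.Sequential(nn.Conv2d(32, 1, 1), nn.Sigmoid())
        # Fork 2: reconstructed sharp image (deblurring as auxiliary task).
        self.deblur_head = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.deblur_head(feats)

model = YForkedNet()
blurry = torch.randn(2, 1, 64, 64)                      # blurry input crops
seg_gt = torch.randint(0, 2, (2, 1, 64, 64)).float()    # nucleus masks
sharp_gt = torch.randn(2, 1, 64, 64)                    # matching sharp images
seg_pred, deblur_pred = model(blurry)
# Joint loss: the deblurring term regularizes training on blurry inputs.
loss = nn.functional.binary_cross_entropy(seg_pred, seg_gt) \
     + nn.functional.mse_loss(deblur_pred, sharp_gt)
loss.backward()
```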


Subjects
Algorithms, Cell Nucleus, Fluorescent Antibody Technique, Humans, Microscopy, Fluorescence, Staining and Labeling
4.
IEEE Trans Image Process ; 28(1): 102-112, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30080148

ABSTRACT

Cross-modal retrieval is gaining importance due to the availability of large amounts of multimedia data. Hashing-based techniques provide an attractive solution to this problem when the data size is large. For cross-modal retrieval, data from the two modalities may be associated with a single label or multiple labels and, in addition, may or may not have a one-to-one correspondence. This work proposes a simple hashing framework that can handle these different scenarios while effectively capturing the semantic relationship between data items. The work proceeds in two stages: the first stage learns the optimal hash codes by factorizing an affinity matrix constructed from the label information; the second stage uses ridge regression and kernel logistic regression to learn the hash functions that map the input data to the bit domain. We also propose a novel iterative solution for cases where the training data are very large, or where the whole training set is not available at once. Extensive experiments on a single-label dataset (Wiki) and multi-label datasets (MirFlickr, NUS-WIDE, Pascal, and LabelMe), together with comparisons against the state-of-the-art, show the usefulness of the proposed approach.
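A minimal NumPy sketch of the two-stage recipe under simplifying assumptions: here the affinity factorization is done with a plain eigendecomposition, and only the ridge-regression variant of the second stage is shown. This is an illustration of the general pattern, not the paper's optimization:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, bits = 100, 32, 16
X = rng.standard_normal((n, d))               # training features (one modality)
Y = rng.integers(0, 5, n)                     # single-label ground truth

# Stage 1: affinity from labels; hash codes from its top eigenvectors.
S = (Y[:, None] == Y[None, :]).astype(float)  # 1 if same label, else 0
vals, vecs = np.linalg.eigh(S)                # eigenvalues in ascending order
B = np.sign(vecs[:, -bits:])                  # n x bits codes in {-1, +1}
B[B == 0] = 1

# Stage 2: ridge regression from features to the learnt codes.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ B)

def hash_fn(x):
    return np.sign(x @ W)                     # query -> binary code

# Retrieval: Hamming distance between a query code and database codes.
q = hash_fn(X[0])
hamming = (B != q).sum(axis=1)
print(hamming[:5])
```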

5.
IEEE Trans Image Process ; 26(8): 3995-4004, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28541900

ABSTRACT

Cross-modal recognition and matching with privileged information are important and challenging problems in computer vision. The cross-modal scenario deals with matching across different modalities and must handle the large variations present across and within each modality. The privileged-information scenario deals with the situation where not all information available during training is available at the testing stage, so algorithms need to leverage the extra information from the training stage itself. We show that, for multi-modal data, either of the above situations may arise if one modality is absent during testing. Here, we propose a novel framework that handles both scenarios seamlessly, with applications to matching multi-modal data. The proposed approach jointly uses data from the two modalities to build a canonical representation that encompasses information from both. We explore four different types of canonical representations for different types of data. The algorithm computes dictionaries and a canonical representation for data from both modalities, such that the transformed sparse coefficients of both modalities equal those of the canonical representation. The sparse coefficients are finally matched using a Mahalanobis metric. Extensive experiments on different datasets, involving RGBD, text-image, and audio-image data, show the effectiveness of the proposed framework.
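A minimal NumPy sketch of the final matching step only, assuming the coupled sparse coefficients have already been computed. The metric M used here (inverse of a regularized covariance) is a common placeholder, not the paper's learned metric:

```python
import numpy as np

def mahalanobis(a, b, M):
    """Mahalanobis distance between two sparse-coefficient vectors."""
    diff = a - b
    return float(np.sqrt(diff @ M @ diff))

rng = np.random.default_rng(1)
gallery = rng.standard_normal((50, 20))   # sparse codes, modality A
probe = rng.standard_normal(20)           # sparse code, modality B

# Placeholder metric: inverse of a regularized gallery covariance.
cov = np.cov(gallery, rowvar=False) + 1e-3 * np.eye(20)
M = np.linalg.inv(cov)

# Match the probe to its nearest gallery item under the metric.
dists = [mahalanobis(probe, g, M) for g in gallery]
print(int(np.argmin(dists)))              # index of the best match
```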

6.
IEEE Trans Image Process ; 25(8): 3826-3837, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27295672

ABSTRACT

Coupled dictionary learning (CDL) has recently emerged as a powerful technique with a wide variety of applications, ranging from image synthesis to classification tasks. In this paper, we extend existing CDL approaches in two ways to make them more suitable for cross-modal matching. Data coming from different modalities may or may not be paired; for example, in an image-text retrieval problem, 100 images of a class may be available for training as opposed to only 50 text samples. Current CDL approaches are not designed to handle such scenarios, where classes of data points in one modality correspond to classes of data points in the other. Given the data from the two modalities, two dictionaries are first learnt for the respective modalities, so that the data have sparse representations with respect to their own dictionaries. Then, the sparse coefficients from the two modalities are transformed such that data from the same class are maximally correlated, while data from different classes have very little correlation. Modeling the coupling between the sparse representations of the two modalities in this way makes the approach work seamlessly for paired as well as unpaired data, and the discriminative coupling term also makes it better suited for classification tasks. Experiments on several publicly available cross-modal datasets, namely the CUHK photo-sketch face dataset, the HFB visible and near-infrared facial image dataset, the IXMAS multiview action recognition dataset, the Wiki image-text dataset, and the Multiple Features dataset, show that this generalized CDL approach outperforms the state-of-the-art for both paired and unpaired data.
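A minimal NumPy sketch of why class-level coupling handles unpaired data, assuming the per-modality sparse coefficients are given: correlation is measured between class means rather than between paired samples, so the two modalities need not have matching sample counts. The scalar objective below is an illustrative stand-in for the paper's coupling term:

```python
import numpy as np

rng = np.random.default_rng(2)
# Unpaired toy data: 100 image codes vs. 50 text codes, 5 classes.
A_img = rng.standard_normal((100, 20)); y_img = np.arange(100) % 5
A_txt = rng.standard_normal((50, 20));  y_txt = np.arange(50) % 5

def class_means(A, y, n_classes=5):
    return np.stack([A[y == c].mean(axis=0) for c in range(n_classes)])

# Coupling at the class level: same-class mean coefficients should be
# maximally correlated across modalities, different-class ones should not.
M_img, M_txt = class_means(A_img, y_img), class_means(A_txt, y_txt)
corr = np.corrcoef(M_img, M_txt)[:5, 5:]     # 5x5 cross-modal correlations
same_class = np.trace(corr) / 5              # diagonal: same-class pairs
diff_class = (corr.sum() - np.trace(corr)) / 20   # 20 off-diagonal entries
coupling_score = same_class - diff_class     # what a CDL solver would maximize
print(round(coupling_score, 3))
```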
